1.2K Downloads
A 12B parameter model from Mistral AI, slightly larger than its Mistral 7B predecessor, NeMo offers a long 128k token context length, advanced world knowledge, and function calling for developers.
Trained for tool use
Last Updated 9 days ago
Mistral NeMo was trained with a context length of up to 128k tokens; longer contexts are accepted, but output quality may degrade.
The model performs strongly across a wide range of benchmarks, including multilingual tasks.
For more details, see the blog post: https://mistral.ai/news/mistral-nemo/
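As a quick illustration of the function calling support, here is a minimal sketch that queries the model through LM Studio's OpenAI-compatible local server. The port (1234) is LM Studio's default; the model key "mistral-nemo" and the get_weather tool are assumptions for illustration, so check your local model list for the exact identifier.

```python
# Minimal sketch: function calling with Mistral NeMo via LM Studio's
# OpenAI-compatible local server (default port 1234).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# One example tool in the OpenAI tools schema. The tool itself is
# hypothetical, purely for illustration.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="mistral-nemo",  # assumed model key; verify against your local list
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model chooses to call the tool, the structured call appears here.
print(response.choices[0].message.tool_calls)
```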
Based on
The underlying model files this model uses
When you download this model, LM Studio picks the source that will best suit your machine (you can override this)
Custom configuration options included with this model
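One such option is the context length, which can be set when the model is loaded. A minimal sketch using the `lms` CLI, assuming the model key is `mistral-nemo` (verify with `lms ls`) and that your LM Studio version supports the `--context-length` flag:

```
lms load mistral-nemo --context-length 32768
```

Values beyond the 128k training context are possible per the note above, but quality may degrade.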